dennet[f88,jmc]		Review of {\it The Intentional Stance} and {\it Elbow Room} for AI
AI shares with philosophy topics in metaphysics,
ontology and epistemology. Philosophers have put more than 2,000
years of work into these matters, so it behooves AI researchers
to consider what they have discovered that we might be able to use.
We can also hope to get more useful work out of philosophers
in the future, especially if we can persuade some of them to treat
their traditional topics more concretely and
to use formalisms definite enough to be subsequently adaptable to AI use.
Among contemporary philosophers, Daniel Dennett is one of the
closest to the AI community in attitude, but there are important
differences, copiously illustrated in the books under review,
that make his work less useful to us than it might otherwise be.
Part of the problem is that philosophers have many objectives
quite different from the scientific and engineering objectives of
AI research. However, there is a lot in common, and attention
to certain more concrete matters suggested by AI may help them
achieve even purely philosophical objectives.
What AI has in common with philosophy is a lot more than what
physics has in common with what the philosophers call ``philosophy of
physics'' or what mathematics has in common with the philosophy of
mathematics. A general AI system, e.g. a mobile robot, needs a
general view of the world into which particular information is fitted
and actions are planned. This overlaps metaphysics. It must have
views about what it and other entities know and can know. This is
epistemology. The variables in its programs and internal databases
must range over certain classes of entities, and this is ontology. It
must make computations about which future states are achievable and
which are not, and this is related to the problem of free will.
{\it Elbow Room} is concerned with the problem of what there is
in the way of free will. As a philosopher, Dennett is primarily
concerned with human free will, although he mentions machines.
He also asks what kind of free will people would want to have.
It seems that philosophers divide themselves into
``compatibilists'' and ``incompatibilists'' when they consider free
will. Compatibilists hold that free will and determinism are
compatible; incompatibilists hold them to be incompatible and often
look to quantum mechanics for a way out. We roboticists had
better build our robots to be compatibilists --- at least with regard
to their concrete actions. For example, suppose we want a robot to
paint a step ladder and the ceiling (McDermott 1982). Suppose it
said,
{\narrow ``Well, I could paint the ceiling first or I could paint
the step ladder first. Whoops! Wait a minute! I'm a robot and
a deterministic device, at least when I function correctly. Therefore, it's
meaningless to say that I can do either. I will do whichever of
the two I am fated to do.''}
Instead, we want the robot to use the word ``can'' (or rather
a suitably corresponding term in its internal language) in such a way
that it will infer that it can paint the ceiling and the step ladder
in either order and then decide that painting the ceiling first
gives a better result, since it avoids getting paint on the
robot's feet.
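
To make this concrete, here is a toy sketch of such reasoning.
The action model, state variables, and scoring below are invented for
illustration and are not taken from either book; the point is only
that {\it can} quantifies over the robot's possible action sequences,
while the choice among them is a separate, comparative step.

    from itertools import permutations

    # Hypothetical action model: each action's effect on the state.
    ACTIONS = {
        "paint_ceiling": {"ceiling_painted": True},   # requires climbing the ladder
        "paint_ladder":  {"ladder_painted": True},
    }

    def execute(plan):
        """Simulate a plan; return the final state, tracking side effects."""
        state = {"ceiling_painted": False, "ladder_painted": False,
                 "paint_on_feet": False}
        for act in plan:
            # Climbing a freshly painted ladder gets paint on the robot's feet.
            if act == "paint_ceiling" and state["ladder_painted"]:
                state["paint_on_feet"] = True
            state.update(ACTIONS[act])
        return state

    def can(plan, goal):
        """`can' in the counterfactual sense: would executing this
        plan achieve the goal, regardless of what the robot will
        in fact be caused to do?"""
        final = execute(plan)
        return all(final[k] == v for k, v in goal.items())

    goal = {"ceiling_painted": True, "ladder_painted": True}
    plans = list(permutations(ACTIONS))
    feasible = [p for p in plans if can(p, goal)]   # both orders qualify
    best = min(feasible, key=lambda p: execute(p)["paint_on_feet"])
    print(best)   # ('paint_ceiling', 'paint_ladder')

The robot concludes that it can achieve the goal in either order and
then prefers ceiling-first because that outcome avoids paint on its feet.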
(McCarthy and Hayes 1969) discusses {\it can} concretely in
terms of systems of finite automata. We define sentences like ``In a
certain initial configuration, automaton 1 can put automaton 3 in
state 7 by time 10, but it won't''. The above sentence is considered
true if there is a sequence of signals along the output lines of
automaton 1 that would put automaton 3 in state 7 by time 10, but the
actual sequence of signals emitted by automaton 1 in the given initial
configuration does not have this effect. Thus the question about what
automaton 1 can do becomes a question about an automaton system that
has external inputs replacing the outputs of automaton 1. When we
want to define {\it can} for a set of initial states, the problem is
more complex and is also treated in the 1969 paper. Unfortunately,
Dennett doesn't reach this level of concreteness in this direction or
any other.
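
A toy rendering of that definition may help. The transition function
of automaton 3 and the actual behavior of automaton 1 below are
invented for illustration; the 1969 paper gives the general
construction. One replaces automaton 1's output line by a free
external input and quantifies over input sequences:

    from itertools import product

    def step3(state, signal):
        """Hypothetical transition function of automaton 3
        (states 0..7, input signals 0 or 1)."""
        return (state + signal) % 8

    def actual_output1(t):
        """What automaton 1 in fact emits at time t in the given
        initial configuration (hypothetical: it always emits 0)."""
        return 0

    def reaches_7(signals, start=0):
        """Does this signal sequence put automaton 3 in state 7?"""
        state = start
        for s in signals:
            state = step3(state, s)
            if state == 7:
                return True
        return False

    T = 10
    # `can': quantify over all possible output sequences of automaton 1,
    # treated as free external inputs to the rest of the system.
    can = any(reaches_7(seq) for seq in product((0, 1), repeat=T))
    # `won't': a single run with automaton 1's actual outputs.
    will = reaches_7([actual_output1(t) for t in range(T)])
    print(can, will)   # True False: it can, but it won't

The {\it can} question is existential over the replaced inputs, while
the ``but it won't'' part is a single run of the original system.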
If a robot uses this notion of {\it can}, it avoids paradox,
because it doesn't have to predict its own actions. Moreover,
it gets the same kind of analysis of its possibilities that
people get when they analyze similar situations.
Some philosophers claim that philosophy is just the sum of the
separate sciences. One can't be sure about philosophy, but AI
requires more --- unless the study of the common sense world is to be
considered one of the sciences. If so, it has an epistemological
character quite different from the usual sciences.
\noindent References:

{\bf McCarthy, J. and P. J. Hayes (1969)}: ``Some Philosophical Problems
from the Standpoint of Artificial Intelligence'', in B. Meltzer and
D. Michie (eds.), {\it Machine Intelligence 4}, Edinburgh: Edinburgh
University Press, pp. 463-502.

{\bf McDermott, D. (1982)}: ``A temporal logic for reasoning about
processes and plans'', {\it Cognitive Science}, 6, pp. 101-155.
\noindent Notes:
Dennett's good ideas.
1. Dennett's 3 stances and their relations. The intentional stance
isn't just an abbreviation of the physical stance. The battle over
the thermostat.
2. Intuition pumps.